Docker : Docker Swarm
2017/07/30
Configure Docker Swarm to create a Docker Cluster with multiple Docker nodes.
In this example, configure a Swarm Cluster with 3 Docker nodes as follows.
There are 2 roles in a Swarm Cluster: [Manager nodes] and [Worker nodes]. This example assigns the roles as follows.
-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.51             eth0|10.0.0.52             eth0|10.0.0.53
+----------+-----------+  +-----------+----------+  +-----------+----------+
| [ node01.srv.world ] |  | [ node02.srv.world ] |  | [ node03.srv.world ] |
|        Manager       |  |        Worker        |  |        Worker        |
+----------------------+  +----------------------+  +----------------------+
[1] This example is based on an environment where SELinux is Permissive or Disabled and Firewalld is disabled on all nodes.
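As a supplementary note (not part of the original steps): if Firewalld is running instead, the Swarm ports (TCP 2377 for cluster management, TCP/UDP 7946 for node communication, UDP 4789 for the overlay network) need to be opened on every node. A minimal sketch with firewall-cmd:

[root@node01 ~]# firewall-cmd --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp --permanent
success
[root@node01 ~]# firewall-cmd --reload
success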
[2]
[3] Configure the Swarm Cluster on the Manager Node.
[root@node01 ~]# docker swarm init
Swarm initialized: current node (lq0eemyzbeu3h4knzr17fax24) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4gx8p7ssh1s16wbfmtioxdd4cq51hrnqkb8jc1okcgoeogf88j-b3jz4624lysm7p5fbu632cqwq \
    10.0.0.51:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
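As a supplementary note: if the join command above is lost, it can be displayed again at any time on the Manager Node.

# print the [docker swarm join] command with the current worker token
[root@node01 ~]# docker swarm join-token worker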
[4] Join the Swarm Cluster on all Worker Nodes. Run the command that was shown when [docker swarm init] was executed on the Manager Node.
[root@node02 ~]# docker swarm join \
--token SWMTKN-1-4gx8p7ssh1s16wbfmtioxdd4cq51hrnqkb8jc1okcgoeogf88j-b3jz4624lysm7p5fbu632cqwq \
10.0.0.51:2377

This node joined a swarm as a worker.
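As a supplementary sketch (not part of the original steps): if a Worker Node later needs to be removed from the Cluster, it can leave and then be cleaned up on the Manager Node.

# run on the worker that should leave
[root@node02 ~]# docker swarm leave
# then remove the stale entry on the Manager Node
[root@node01 ~]# docker node rm node02.srv.world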
[5] Verify with the [docker node ls] command on the Manager Node that the Worker Nodes have joined the Cluster normally.
[root@node01 ~]# docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
lq0eemyzbeu3h4knzr17fax24 *  node01.srv.world  Ready   Active        Leader
mi4u40lwmp9lxwaatkeiukmte    node03.srv.world  Ready   Active
nl3g3tvusrdf9kxavpfy73ges    node02.srv.world  Ready   Active
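Node roles can also be changed later. As a supplementary sketch (using the node names of this example), a Worker can be promoted to a Manager and demoted back, run on the Manager Node:

[root@node01 ~]# docker node promote node02.srv.world
[root@node01 ~]# docker node demote node02.srv.world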
[6] After creating the Swarm Cluster, next, configure the services that the Swarm Cluster provides.
First, create the same container image on all Nodes for the service. In this example, use a container image that provides an HTTP service, like the example in the link.
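The linked example is not reproduced here; the following is only a minimal sketch of such an image, assuming a Fedora-based httpd container that returns the node's name, built with the same name [web_server:latest] on every node:

[root@node01 ~]# vi Dockerfile
FROM registry.fedoraproject.org/fedora
RUN dnf -y install httpd && dnf clean all
# page that identifies the node (adjust the text on node02/node03)
RUN echo "node01" > /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

[root@node01 ~]# docker build -t web_server:latest .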
[7] Configure the service on the Manager Node. After the service has been configured successfully, access the Manager Node's hostname or IP address to verify that it works normally. Requests are load-balanced across the nodes in round-robin fashion, as shown below.
[root@node01 ~]# docker images
REPOSITORY                          TAG      IMAGE ID       CREATED          SIZE
web_server                          latest   3c04799a039b   40 minutes ago   461 MB
registry.fedoraproject.org/fedora   latest   7f17e6b4a386   4 weeks ago      232 MB

# create a service with 2 replicas
[root@node01 ~]# docker service create --name swarm_cluster --replicas=2 -p 80:80 web_server:latest
7xg4yssy516xwgkjx2vxbw05d

# show service list
[root@node01 ~]# docker service ls
ID            NAME           MODE        REPLICAS  IMAGE
5divuvy0wzeh  swarm_cluster  replicated  2/2       web_server:latest

# inspect the service
[root@node01 ~]# docker service inspect swarm_cluster --pretty
ID:             5divuvy0wzeh0og4cqyjoak28
Name:           swarm_cluster
Service Mode:   Replicated
 Replicas:      2
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Max failure ratio: 0
ContainerSpec:
 Image:         web_server:latest
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort 80
  Protocol = tcp
  TargetPort = 80

# show service state
[root@node01 ~]# docker service ps swarm_cluster
ID       NAME             IMAGE              NODE              DESIRED STATE  CURRENT STATE           ERROR..
c6nzr..  swarm_cluster.1  web_server:latest  node01.srv.world  Running        Running 30 seconds ago
sqs7k..  swarm_cluster.2  web_server:latest  node03.srv.world  Running        Running 30 seconds ago

# verify it works normally
[root@node01 ~]# curl http://node01.srv.world/
node03
[root@node01 ~]# curl http://node01.srv.world/
node01
[root@node01 ~]# curl http://node01.srv.world/
node03
[root@node01 ~]# curl http://node01.srv.world/
node01
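As a supplementary note: because Swarm's routing mesh publishes the port on every node in the Cluster, the same check can also be made against the Worker Nodes' hostnames; responses rotate among the replicas just like above.

[root@node01 ~]# curl http://node02.srv.world/
[root@node01 ~]# curl http://node03.srv.world/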
[8] If you'd like to change the number of replicas, configure it as follows.
# change replicas to 3
[root@node01 ~]# docker service scale swarm_cluster=3
swarm_cluster scaled to 3
[root@node01 ~]# docker service ps swarm_cluster
ID       NAME             IMAGE              NODE              DESIRED STATE  CURRENT STATE             ERROR..
c6nzr..  swarm_cluster.1  web_server:latest  node01.srv.world  Running        Running 2 minutes ago
sqs7k..  swarm_cluster.2  web_server:latest  node03.srv.world  Running        Running 2 minutes ago
442hb..  swarm_cluster.3  web_server:latest  node02.srv.world  Running        Preparing 4 seconds ago

# verify working
[root@node01 ~]# curl http://node01.srv.world/
node02
[root@node01 ~]# curl http://node01.srv.world/
node03
[root@node01 ~]# curl http://node01.srv.world/
node01
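As supplementary commands (not part of the original steps), the service can be scaled back down or removed entirely when it is no longer needed:

# scale back to 2 replicas
[root@node01 ~]# docker service scale swarm_cluster=2
# remove the service
[root@node01 ~]# docker service rm swarm_cluster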